<?xml version="1.0" encoding="ISO-8859-1"?>
<metadatalist>
	<metadata ReferenceType="Conference Proceedings">
		<site>sibgrapi.sid.inpe.br 802</site>
		<holdercode>{ibi 8JMKD3MGPEW34M/46T9EHH}</holdercode>
		<identifier>8JMKD3MGPAW/3RN65ML</identifier>
		<repository>sid.inpe.br/sibgrapi/2018/08.27.17.27</repository>
		<lastupdate>2018:08.27.17.27.39 sid.inpe.br/banon/2001/03.30.15.38 administrator</lastupdate>
		<metadatarepository>sid.inpe.br/sibgrapi/2018/08.27.17.27.39</metadatarepository>
		<metadatalastupdate>2022:06.14.00.09.09 sid.inpe.br/banon/2001/03.30.15.38 administrator {D 2018}</metadatalastupdate>
		<doi>10.1109/SIBGRAPI.2018.00023</doi>
		<citationkey>CunhaMLTQSSP:2018:LaRePa</citationkey>
		<title>Patch PlaNet: Landmark Recognition with Patch Classification Using Convolutional Neural Networks</title>
		<format>On-line</format>
		<year>2018</year>
		<numberoffiles>1</numberoffiles>
		<size>1936 KiB</size>
		<author>Cunha, Kelvin Batista,</author>
		<author>Maggi, Lucas,</author>
		<author>Lima, João Paulo,</author>
		<author>Teichrieb, Veronica,</author>
		<author>Quintino, Jonysberg Peixoto,</author>
		<author>da Silva, Fabio Q. B.,</author>
		<author>Santos, Andre L. M.,</author>
		<author>Pinho, Helder,</author>
		<affiliation>Voxar Labs - Centro de Informática - Universidade Federal de Pernambuco</affiliation>
		<affiliation>Voxar Labs - Centro de Informática - Universidade Federal de Pernambuco</affiliation>
		<affiliation>Voxar Labs - Centro de Informática - Universidade Federal Rural de Pernambuco</affiliation>
		<affiliation>Voxar Labs - Centro de Informática - Universidade Federal de Pernambuco</affiliation>
		<affiliation>Projeto de P&amp;D CIN/Samsung - Universidade Federal de Pernambuco</affiliation>
		<affiliation>Universidade Federal de Pernambuco</affiliation>
		<affiliation>Universidade Federal de Pernambuco</affiliation>
		<affiliation>Samsung Instituto de Desenvolvimento para a Informática</affiliation>
		<editor>Ross, Arun,</editor>
		<editor>Gastal, Eduardo S. L.,</editor>
		<editor>Jorge, Joaquim A.,</editor>
		<editor>Queiroz, Ricardo L. de,</editor>
		<editor>Minetto, Rodrigo,</editor>
		<editor>Sarkar, Sudeep,</editor>
		<editor>Papa, João Paulo,</editor>
		<editor>Oliveira, Manuel M.,</editor>
		<editor>Arbeláez, Pablo,</editor>
		<editor>Mery, Domingo,</editor>
		<editor>Oliveira, Maria Cristina Ferreira de,</editor>
		<editor>Spina, Thiago Vallin,</editor>
		<editor>Mendes, Caroline Mazetto,</editor>
		<editor>Costa, Henrique Sérgio Gutierrez,</editor>
		<editor>Mejail, Marta Estela,</editor>
		<editor>Geus, Klaus de,</editor>
		<editor>Scheer, Sergio,</editor>
		<e-mailaddress>kbc@cin.ufpe.br</e-mailaddress>
		<conferencename>Conference on Graphics, Patterns and Images, 31 (SIBGRAPI)</conferencename>
		<conferencelocation>Foz do Iguaçu, PR, Brazil</conferencelocation>
		<date>29 Oct.-1 Nov. 2018</date>
		<publisher>IEEE Computer Society</publisher>
		<publisheraddress>Los Alamitos</publisheraddress>
		<booktitle>Proceedings</booktitle>
		<tertiarytype>Full Paper</tertiarytype>
		<transferableflag>1</transferableflag>
		<versiontype>finaldraft</versiontype>
		<keywords>Landmark Recognition, Convolutional Neural Network, Image-Patch.</keywords>
		<abstract>In this work, we address the problem of landmark recognition. We extend PlaNet, a deep neural network model that treats landmark recognition as a classification problem and recognizes places around the world. We propose an extension of the PlaNet technique that uses a voting scheme to perform the classification: the image is divided into previously defined regions, and the landmark is inferred from these regions. The model's prediction therefore depends not only on the features learned by the deep convolutional neural network architecture during training, but also on local information from each region of the image being classified. To validate our proposal, we trained the original PlaNet model and our variation on a database built with images from Flickr, and evaluated both models on the Paris and Oxford Buildings datasets. We observed that adding the image division and voting structure improves the accuracy of the model by 5-11 percentage points on average, reducing the level of ambiguity found during inference.</abstract>
		<language>en</language>
		<targetfile>Patch PlaNet Landmark Recognition with Patch Classification using Convolutional Neural Networks.pdf</targetfile>
		<usergroup>kbc@cin.ufpe.br</usergroup>
		<visibility>shown</visibility>
		<documentstage>not transferred</documentstage>
		<mirrorrepository>sid.inpe.br/banon/2001/03.30.15.38.24</mirrorrepository>
		<nexthigherunit>8JMKD3MGPAW/3RPADUS</nexthigherunit>
		<nexthigherunit>8JMKD3MGPEW34M/4742MCS</nexthigherunit>
		<citingitemlist>sid.inpe.br/sibgrapi/2018/09.03.20.37 7</citingitemlist>
		<hostcollection>sid.inpe.br/banon/2001/03.30.15.38</hostcollection>
		<agreement>agreement.html .htaccess .htaccess2</agreement>
		<lasthostcollection>sid.inpe.br/banon/2001/03.30.15.38</lasthostcollection>
		<url>http://sibgrapi.sid.inpe.br/rep-/sid.inpe.br/sibgrapi/2018/08.27.17.27</url>
	</metadata>
</metadatalist>